Facilitating WeBWorK Problem Authoring
The goal of this project was to create a graphical user interface to simplify the authoring of problems for WeBWorK, an online homework system. Interviews with potential users and research into similar user-interface applications aided in designing the interface. Tutorials for potential users were also created. Professors in the Mathematical Sciences Department at WPI tested the interface and/or the tutorials, and improvements to both were made based on their feedback.
GCformer: An Efficient Framework for Accurate and Scalable Long-Term Multivariate Time Series Forecasting
Transformer-based models have emerged as promising tools for time series forecasting. However, these models cannot make accurate predictions for long input time series. On the one hand, they fail to capture global dependencies within time series data. On the other hand, long input sequences usually lead to large model size and high time complexity. To address these limitations, we present GCformer, which combines a structured global convolutional branch for processing long input sequences with a local Transformer-based branch for capturing short, recent signals. A cohesive framework for the global convolution kernel is introduced, utilizing three distinct parameterization methods. The structured convolutional kernel selected for the global branch is specifically crafted with sublinear complexity, allowing efficient and effective processing of lengthy and noisy input signals. Empirical studies on six benchmark datasets demonstrate that GCformer outperforms state-of-the-art methods, reducing MSE on multivariate time series benchmarks by 4.38% and model parameters by 61.92%. In particular, the global convolutional branch can serve as a plug-in block to enhance the performance of other models, including various recently published Transformer-based models, with an average improvement of 31.93%. Our code is publicly available at https://github.com/zyj-111/GCformer
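
As an illustration of the two-branch design described above, the following PyTorch sketch pairs a long global convolution over the full input window with a small Transformer over only the most recent steps, and sums their forecasts. All class and parameter names are hypothetical, and the plain dense kernel below stands in for GCformer's structured, sublinear-complexity parameterizations; see the linked repository for the authors' implementation.

```python
# Minimal sketch (not the authors' implementation) of a global-convolution branch
# plus a local Transformer branch for multivariate forecasting.
import torch
import torch.nn as nn


class TwoBranchForecaster(nn.Module):
    def __init__(self, n_vars, seq_len, pred_len, local_len=48, d_model=64):
        super().__init__()
        # Global branch: one long depthwise convolution per variable.
        # (A dense kernel for illustration; GCformer parameterizes it with
        # structured forms to keep complexity sublinear.)
        self.global_conv = nn.Conv1d(
            n_vars, n_vars, kernel_size=seq_len, groups=n_vars, padding=seq_len - 1
        )
        self.global_head = nn.Linear(seq_len, pred_len)

        # Local branch: a small Transformer over the last `local_len` steps.
        self.local_len = local_len
        self.in_proj = nn.Linear(n_vars, d_model)
        self.local_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.local_head = nn.Linear(local_len * d_model, pred_len * n_vars)
        self.pred_len = pred_len

    def forward(self, x):  # x: (batch, seq_len, n_vars)
        b, t, v = x.shape
        # Global branch works on (batch, n_vars, seq_len); crop to causal length.
        g = self.global_conv(x.transpose(1, 2))[..., :t]
        g = self.global_head(g).transpose(1, 2)           # (b, pred_len, n_vars)

        # Local branch attends only to the recent window.
        recent = self.in_proj(x[:, -self.local_len:, :])  # (b, local_len, d_model)
        l = self.local_encoder(recent).flatten(1)
        l = self.local_head(l).view(b, self.pred_len, v)  # (b, pred_len, n_vars)
        return g + l                                      # combine the two branches


if __name__ == "__main__":
    model = TwoBranchForecaster(n_vars=7, seq_len=336, pred_len=96)
    y = model(torch.randn(2, 336, 7))
    print(y.shape)  # torch.Size([2, 96, 7])
```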
OMSN and FAROS: OCTA Microstructure Segmentation Network and Fully Annotated Retinal OCTA Segmentation Dataset
The lack of efficient segmentation methods and fully-labeled datasets limits the comprehensive assessment of optical coherence tomography angiography (OCTA) microstructures such as the retinal vessel network (RVN) and the foveal avascular zone (FAZ), which are of great value in the evaluation of ophthalmic and systemic diseases. Here, we introduce an innovative OCTA microstructure segmentation network (OMSN) that combines an encoder-decoder architecture with multi-scale skip connections and the split-attention-based residual network ResNeSt, paying specific attention to OCTA microstructural features while facilitating better model convergence and feature representations. The proposed OMSN achieves excellent single- and multi-task performance for RVN and/or FAZ segmentation; notably, the multi-task models outperform the single-task models on the same dataset. On this basis, a fully annotated retinal OCTA segmentation (FAROS) dataset is constructed semi-automatically, filling the vacancy of a pixel-level fully-labeled OCTA dataset. The OMSN multi-task segmentation model retrained with FAROS further confirms its outstanding accuracy for simultaneous RVN and FAZ segmentation.
Comment: 10 pages, 6 figures, submitted to IEEE Transactions on Medical Imaging (TMI)
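
To make the single- versus multi-task distinction concrete, here is a minimal PyTorch sketch of a shared encoder-decoder trunk with two pixel-wise heads, one predicting the RVN mask and one the FAZ mask. It is a toy illustration with hypothetical names, not the OMSN architecture, which uses a ResNeSt backbone and multi-scale skip connections.

```python
# Toy multi-task segmentation model: shared trunk, two mask heads.
import torch
import torch.nn as nn


def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )


class MultiTaskSegNet(nn.Module):
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)   # decode after concatenating the skip
        self.rvn_head = nn.Conv2d(base, 1, 1)    # retinal-vessel-network mask
        self.faz_head = nn.Conv2d(base, 1, 1)    # foveal-avascular-zone mask

    def forward(self, x):                        # x: (b, 1, H, W)
        s1 = self.enc1(x)
        s2 = self.enc2(self.pool(s1))
        d1 = self.dec1(torch.cat([self.up(s2), s1], dim=1))
        return torch.sigmoid(self.rvn_head(d1)), torch.sigmoid(self.faz_head(d1))


if __name__ == "__main__":
    rvn, faz = MultiTaskSegNet()(torch.randn(2, 1, 128, 128))
    print(rvn.shape, faz.shape)  # both (2, 1, 128, 128)
```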
FiLM: Frequency improved Legendre Memory Model for Long-term Time Series Forecasting
Recent studies have shown that deep learning models such as RNNs and Transformers have brought significant performance gains for long-term forecasting of time series because they effectively utilize historical information. We found, however, that there is still great room for improvement in how historical information is preserved in neural networks while avoiding overfitting to noise present in the history. Addressing this allows better utilization of the capabilities of deep learning models. To this end, we design a Frequency improved Legendre Memory model, or FiLM: it applies Legendre polynomial projections to approximate historical information, uses Fourier projection to remove noise, and adds a low-rank approximation to speed up computation. Our empirical studies show that the proposed FiLM significantly improves the accuracy of state-of-the-art models in multivariate and univariate long-term forecasting by 20.3% and 22.6%, respectively. We also demonstrate that the representation module developed in this work can be used as a general plug-in to improve the long-term prediction performance of other deep learning modules. Code is available at https://github.com/tianzhou2011/FiLM/
Comment: Accepted by the Thirty-Sixth Annual Conference on Neural Information Processing Systems (NeurIPS 2022)
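
The two ingredients named in the abstract, Legendre-polynomial memory and Fourier-based denoising, can be illustrated numerically. The NumPy sketch below (not the FiLM code; the polynomial degree and mode count are arbitrary illustrative choices) fits a noisy history window with a low-order Legendre expansion and, separately, keeps only its lowest Fourier modes.

```python
# Numerical sketch of (1) Legendre-projection memory and (2) Fourier denoising.
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)
t = np.linspace(-1.0, 1.0, 512)                        # history window mapped to [-1, 1]
signal = np.sin(3 * np.pi * t) + 0.3 * t               # smooth underlying trend
history = signal + 0.2 * rng.standard_normal(t.size)   # noisy observations

# (1) Legendre memory: least-squares fit of the first deg+1 Legendre coefficients,
#     compressing 512 samples into 17 numbers.
deg = 16
V = L.legvander(t, deg)                                 # (512, deg+1) basis matrix
coeffs, *_ = np.linalg.lstsq(V, history, rcond=None)
legendre_approx = V @ coeffs

# (2) Fourier denoising: zero out all but the lowest k frequency modes.
k = 8
spectrum = np.fft.rfft(history)
spectrum[k:] = 0.0
fourier_approx = np.fft.irfft(spectrum, n=history.size)

for name, approx in [("legendre", legendre_approx), ("fourier", fourier_approx)]:
    err = np.mean((approx - signal) ** 2)
    print(f"{name}: MSE vs clean signal = {err:.4f}")
```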
- …